169 research outputs found
BARTER: Profile Model Exchange for Behavior-Based Access Control and Communication Security in MANETs
There is a considerable body of literature and technology that provides access control and security of communication for Mobile Ad-hoc Networks (MANETs) based on cryptographic authentication technologies and protocols. We introduce a new method of granting access and securing communication in a MANET environment to augment, not replace, existing techniques. Previous approaches grant access to the MANET, or to its services, merely by means of an authenticated identity or a qualified role. We present BARTER, a framework that, in addition, requires nodes to exchange a model of their behavior to grant access to the MANET and to assess the legitimacy of their subsequent communication. This framework forces the nodes to declare not only who or what they are, but also how they behave. BARTER continuously runs membership acceptance and update protocols to give access to, and accept traffic only from, nodes whose behavior model is considered "normal" according to the behavior model of the nodes in the MANET. We implement and experimentally evaluate the merger between BARTER and other cryptographic technologies and show that BARTER can implement fully distributed, automatic access control and updates with small cryptographic costs. Although the methods proposed involve the use of content-based anomaly detection models, the generic infrastructure implementing the methodology may utilize any behavior model. Even though the experiments are implemented for MANETs, the idea of model exchange for access control can be applied to any type of network.
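As a rough illustration of the traffic-legitimacy check described above, an exchanged behavior model can be sketched as the set of content tokens a node claims to emit; the `filter_traffic` helper, the `peer_models` mapping, and the token representation are all hypothetical stand-ins for the paper's content-based anomaly detection models:

```python
def model_accepts(model, payload_tokens):
    """A peer's exchanged behavior model, sketched as the set of content
    tokens it claims to emit; traffic outside that model is rejected."""
    return set(payload_tokens) <= model

def filter_traffic(peer_models, packets):
    """Accept a packet only if its sender exchanged a model at admission
    time and the packet's content matches that model (assessing the
    'legitimacy of subsequent communication')."""
    return [p for p in packets
            if p["src"] in peer_models
            and model_accepts(peer_models[p["src"]], p["tokens"])]
```

A node that never exchanged a model, or whose traffic drifts outside its declared model, is dropped at this check rather than merely at authentication time.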
BARTER: Behavior Profile Exchange for Behavior-Based Admission and Access Control in MANETs
Mobile Ad-hoc Networks (MANETs) are very dynamic networks with devices continuously entering and leaving the group. The highly dynamic nature of MANETs renders the manual creation and update of policies associated with the initial incorporation of devices to the MANET (admission control) as well as with anomaly detection during communications among members (access control) a very difficult task. In this paper, we present BARTER, a mechanism that automatically creates and updates admission and access control policies for MANETs based on behavior profiles. BARTER is an adaptation for fully distributed environments of our previously introduced BB-NAC mechanism for NAC technologies. Rather than relying on a centralized NAC enforcer, MANET members initially exchange their behavior profiles and compute individual local definitions of normal network behavior. During admission or access control, each member issues an individual decision based on its definition of normalcy. Individual decisions are then aggregated via a threshold cryptographic infrastructure that requires an agreement among a fixed number of MANET members to change the status of the network. We present experimental results using content and volumetric behavior profiles computed from the ENRON dataset. In particular, we show that the mechanism achieves true rejection rates of 95% with false rejection rates of 9%.
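The threshold aggregation can be sketched as a k-of-n vote. Plain vote counting stands in here for the paper's threshold cryptographic infrastructure, and the Jaccard-style distance over token-set profiles is an assumption, not the paper's actual profile model:

```python
def profile_distance(p, q):
    """Jaccard distance between two behavior profiles given as token sets."""
    union = p | q
    if not union:
        return 0.0
    return 1.0 - len(p & q) / len(union)

def member_vote(member_profile, candidate_profile, tolerance=0.5):
    """Each MANET member votes using its own local definition of normalcy."""
    return profile_distance(member_profile, candidate_profile) <= tolerance

def admit(member_profiles, candidate_profile, k):
    """Admit the candidate only if at least k members agree (threshold rule)."""
    votes = sum(member_vote(m, candidate_profile) for m in member_profiles)
    return votes >= k
```

In the real mechanism the agreement of k members is enforced cryptographically (no single member can forge the aggregate decision), but the decision logic follows this shape.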
Behavior-Profile Clustering for False Alert Reduction in Anomaly Detection Sensors
Anomaly detection (AD) sensors compute behavior profiles to recognize malicious or anomalous activities. The behavior of a host is checked continuously by the AD sensor and an alert is raised when the behavior deviates from its behavior profile. Unfortunately, the majority of AD sensors suffer from high volumes of false alerts, either maliciously crafted by the host or originating from insufficient training of the sensor. We present a cluster-based AD sensor that relies on clusters of behavior profiles to identify anomalous behavior. A host's behavior raises an alert only when a group of host profiles with similar behavior (a cluster of behavior profiles) detects the anomaly, rather than relying on the host's own behavior profile alone (a single-profile AD sensor). A cluster-based AD sensor significantly decreases the volume of false alerts by providing a more robust model of normal behavior based on clusters of behavior profiles. Additionally, we introduce an architecture designed for the deployment of cluster-based AD sensors. The behavior profile of each network host is computed by its closest switch, which is also responsible for performing the anomaly detection for each of the hosts in its subnet. By placing the AD sensors at the switch, we eliminate the possibility of hosts crafting malicious alerts. Our experimental results based on wireless behavior profiles from users in the CRAWDAD dataset show that the volume of false alerts generated by cluster-based AD sensors is reduced by at least 50% compared to single-profile AD sensors.
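A minimal sketch of the cluster-based decision rule, with each profile reduced to a set of previously seen event tokens (a toy stand-in for the paper's wireless behavior profiles; the majority quorum is an assumed parameter):

```python
def is_anomalous(event, profile):
    """A profile here is just the set of event tokens previously seen."""
    return event not in profile

def cluster_alert(event, cluster_profiles, quorum=0.5):
    """Raise an alert only if more than `quorum` of the profiles in the
    host's cluster consider the event anomalous, instead of trusting a
    single host's profile."""
    flags = sum(is_anomalous(event, p) for p in cluster_profiles)
    return flags / len(cluster_profiles) > quorum
```

An event that is merely unusual for one host but common across its cluster produces no alert, which is exactly where single-profile sensors generate false positives.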
Data Sanitization: Improving the Forensic Utility of Anomaly Detection Systems
Anomaly Detection (AD) sensors have become an invaluable tool for forensic analysis and intrusion detection. Unfortunately, the detection accuracy of all learning-based ADs depends heavily on the quality of the training data, which is often poor, severely degrading their reliability as a protection and forensic analysis tool. In this paper, we propose extending the training phase of an AD to include a sanitization phase that aims to improve the quality of unlabeled training data by making them as "attack-free" and "regular" as possible in the absence of absolute ground truth. Our proposed scheme is agnostic to the underlying AD, boosting its performance based solely on training-data sanitization. Our approach is to generate multiple AD models for content-based AD sensors trained on small slices of the training data. These AD "micro-models" are used to test the training data, producing alerts for each training input. We employ voting techniques to determine which of these training items are likely attacks. Our preliminary results show that sanitization increases 0-day attack detection while maintaining a low false positive rate, increasing confidence in the AD alerts. We perform an initial characterization of the performance of our system when we deploy sanitized versus unsanitized AD systems in combination with expensive host-based attack-detection systems. Finally, we provide some preliminary evidence that our system incurs only an initial modest cost, which can be amortized over time during online operation.
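The micro-model voting scheme might be sketched as follows. The set-of-characters "content model" is a toy stand-in for the paper's content-based AD models, and the slice count and vote threshold are assumed parameters:

```python
def train_micro_model(slice_items):
    """A toy content model: the set of token values seen in one slice."""
    model = set()
    for item in slice_items:
        model.update(item)
    return model

def sanitize(training_data, n_slices=3, vote_threshold=0.5):
    """Split the data into slices, train one micro-model per slice, let
    every micro-model vote on every training item, and drop items that a
    majority of micro-models flag as abnormal."""
    size = max(1, len(training_data) // n_slices)
    slices = [training_data[i:i + size]
              for i in range(0, len(training_data), size)]
    models = [train_micro_model(s) for s in slices]
    clean = []
    for item in training_data:
        flags = sum(1 for m in models if not set(item) <= m)
        if flags / len(models) <= vote_threshold:
            clean.append(item)
    return clean
```

The intuition: an attack localized to one slice of the training data pollutes only that slice's micro-model, so the other micro-models outvote it and the attack is removed before the production model is trained.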
Ethics in Security Vulnerability Research
Debate has arisen in the scholarly community, as well as among policymakers and business entities, regarding the role of vulnerability researchers and security practitioners as sentinels of information security adequacy. The exact definition of vulnerability research, and who counts as a "vulnerability researcher," is a subject of debate in the academic and business communities. For purposes of this article, we presume that vulnerability researchers are driven by a desire to prevent information security harms and engage in responsible disclosure upon discovery of a security vulnerability. Provided that these researchers and practitioners do not themselves engage in conduct that causes harm, their work need not run afoul of ethical and legal constraints. We advocate crafting a code of conduct for vulnerability researchers and practitioners, including the implementation of procedural safeguards to ensure minimization of harm.
An Email Worm Vaccine Architecture
We present an architecture for detecting "zero-day" worms and viruses in incoming email. Our main idea is to intercept every incoming message, pre-scan it for potentially dangerous attachments, and only deliver messages that are deemed safe. Unlike traditional scanning techniques that rely on some form of pattern matching (signatures), we use behavior-based anomaly detection. Under our approach, we "open" all suspicious attachments inside an instrumented virtual machine, looking for dangerous actions such as writing to the Windows registry, and flag suspicious messages. The attachment processing can be offloaded to a cluster of ancillary machines (as many as are needed to keep up with a site's email load), thus not imposing any computational load on the mail server. Flagged messages are put in a "quarantine" area for further, more labor-intensive processing. Our implementation shows that we can use a large number of malware-checking VMs operating in parallel to cope with high loads. Finally, we show that we are able to detect the actions of all malicious software we tested, while keeping the false positive rate to under 5%.
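The message flow can be sketched roughly as below. The `run_in_vm` callable stands in for the instrumented virtual machine, and the extension and action lists are illustrative assumptions, not taken from the paper:

```python
# Illustrative lists only; real deployments would use richer policies.
SUSPICIOUS_EXTENSIONS = {".exe", ".scr", ".vbs", ".js"}
DANGEROUS_ACTIONS = {"registry_write", "mass_mail", "file_overwrite"}

def pre_scan(attachment_name):
    """Cheap static check: flag attachments with executable extensions."""
    return any(attachment_name.lower().endswith(ext)
               for ext in SUSPICIOUS_EXTENSIONS)

def process_message(message, run_in_vm):
    """Deliver or quarantine a message, mirroring the intercept/pre-scan/
    sandbox flow. `run_in_vm` must open the attachment inside the
    instrumented VM and return the set of actions observed."""
    for name in message["attachments"]:
        if pre_scan(name):
            actions = run_in_vm(name)
            if actions & DANGEROUS_ACTIONS:
                return "quarantine"
    return "deliver"
```

Because `run_in_vm` is only invoked for pre-scanned attachments, the expensive sandbox work is exactly the part that can be offloaded to the ancillary VM cluster.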
Bait and Snitch: Defending Computer Systems with Decoys
Threats against computer networks continue to multiply, but existing security solutions are persistently unable to keep pace with these challenges. In this paper we present a new paradigm for securing computational resources, which we call decoy technology. This technique involves seeding a system with data that appears authentic but is in fact spurious. Attacks can then be detected by monitoring this phony information for access events. Decoys are capable of detecting malicious activity, such as insider and masquerade attacks, that is beyond the scope of traditional security measures. They can be used to address confidentiality breaches either proactively or after they have taken place. This work examines the challenges that must be overcome in order to successfully deploy decoys as part of a comprehensive security solution. It discusses situations where decoys are particularly useful, as well as characteristics that effective decoy material should share. Furthermore, we describe the tools that we have developed to efficiently craft and distribute decoys in order to form a network of sensors that is capable of detecting adversarial action that occurs anywhere in an organization's systems.
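At its core the detection idea reduces to a simple invariant: legitimate workflows never touch decoy material, so any access event on a planted decoy is itself the alert. A minimal sketch, with hypothetical file paths and a hand-rolled access hook:

```python
class DecoyMonitor:
    """Minimal sketch of a decoy sensor: plant spurious-but-plausible
    items, then treat any access to them as evidence of adversarial action."""

    def __init__(self):
        self.decoys = set()
        self.alerts = []

    def plant(self, path):
        """Register a decoy item (its content should look authentic)."""
        self.decoys.add(path)

    def on_access(self, path, actor):
        """Called on every access event; real data triggers nothing,
        decoys are accessed only by attackers or masqueraders."""
        if path in self.decoys:
            self.alerts.append((actor, path))
            return True
        return False
```

The hard problems the paper addresses (crafting believable decoy content, distributing decoys at scale, keeping false alarms from legitimate indexing tools low) sit outside this skeleton.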
Data Sanitization: Improving the Forensic Utility of Anomaly Detection Systems
Anomaly Detection (AD) sensors have become an invaluable tool for forensic analysis and intrusion detection. Unfortunately, the detection performance of all learning-based ADs depends heavily on the quality of the training data. In this paper, we extend the training phase of an AD to include a sanitization phase. This phase significantly improves the quality of unlabeled training data by making them as "attack-free" as possible in the absence of absolute ground truth. Our approach is agnostic to the underlying AD, boosting its performance based solely on training-data sanitization. Our approach is to generate multiple AD models for content-based AD sensors trained on small slices of the training data. These AD "micro-models" are used to test the training data, producing alerts for each training input. We employ voting techniques to determine which of these training items are likely attacks. Our preliminary results show that sanitization increases 0-day attack detection while in most cases reducing the false positive rate. We analyze the performance gains when we deploy sanitized versus unsanitized AD systems in combination with expensive host-based attack-detection systems. Finally, we show that our system incurs only an initial modest cost, which can be amortized over time during online operation.
STAND: Sanitization Tool for ANomaly Detection
The efficacy of Anomaly Detection (AD) sensors depends heavily on the quality of the data used to train them. Artificial or contrived training data may not provide a realistic view of the deployment environment. Most realistic data sets are dirty; that is, they contain a number of attacks or anomalous events. The size of these high-quality training data sets makes manual removal or labeling of attack data infeasible. As a result, sensors trained on this data can miss attacks and their variations. We propose extending the training phase of AD sensors (in a manner agnostic to the underlying AD algorithm) to include a sanitization phase. This phase generates multiple models conditioned on small slices of the training data. We use these "micro-models" to produce provisional labels for each training input, and we combine the micro-models in a voting scheme to determine which parts of the training data may represent attacks. Our results suggest that this phase automatically and significantly improves the quality of unlabeled training data by making it as "attack-free" and "regular" as possible in the absence of absolute ground truth. We also show how a collaborative approach that combines models from different networks or domains can further refine the sanitization process to thwart targeted training or mimicry attacks against a single site.
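The collaborative refinement step might be sketched as a second voting pass over models contributed by other sites; the set-based model representation and the majority rule are assumptions for illustration:

```python
def refine_with_remote(clean_items, remote_models, threshold=0.5):
    """Re-check locally sanitized items against behavior models from
    other networks: drop any item that a majority of remote models flag
    as abnormal. Content poisoned into only the local site's training
    data is unlikely to look normal to every remote model."""
    kept = []
    for item in clean_items:
        flags = sum(1 for model in remote_models if not set(item) <= model)
        if flags / len(remote_models) <= threshold:
            kept.append(item)
    return kept
```

This is the intuition behind thwarting targeted training attacks: an adversary who can poison one site's data would have to poison many independent sites' models to survive the cross-site vote.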